This tutorial illustrates the core visualization utilities available in Ax.
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import (
    interact_fitted,
    plot_objective_vs_constraints,
    tile_fitted,
)
from ax.plot.slice import plot_slice
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
[INFO 01-05 05:18:28] ax.utils.notebook.plotting: Injecting Plotly library into cell. Do not overwrite or delete cell.
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the one in the Service API tutorial, so the explanation is omitted here. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(p_name) for p_name in param_names])
    noise1, noise2 = np.random.normal(0, noise_sd, 2)
    return {
        "hartmann6": (hartmann6(x) + noise1, noise_sd),
        "l2norm": (np.sqrt((x ** 2).sum()) + noise2, noise_sd),
    }
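For reference, the Hartmann6 benchmark that `hartmann6` implements can be sketched in plain NumPy. The constants below come from the standard benchmark definition in the test-function literature; this standalone version is for illustration only and is not the Ax implementation.

```python
import numpy as np

# Standard Hartmann6 constants (alpha, A, P) from the benchmark literature.
ALPHA = np.array([1.0, 1.2, 3.0, 3.2])
A = np.array([
    [10.0, 3.0, 17.0, 3.5, 1.7, 8.0],
    [0.05, 10.0, 17.0, 0.1, 8.0, 14.0],
    [3.0, 3.5, 1.7, 10.0, 17.0, 8.0],
    [17.0, 8.0, 0.05, 10.0, 0.1, 14.0],
])
P = 1e-4 * np.array([
    [1312, 1696, 5569, 124, 8283, 5886],
    [2329, 4135, 8307, 3736, 1004, 9991],
    [2348, 1451, 3522, 2883, 3047, 6650],
    [4047, 8828, 8732, 5743, 1091, 381],
])

def hartmann6_reference(x):
    """Hartmann6 test function on [0, 1]^6; global minimum is about -3.32237."""
    x = np.asarray(x, dtype=float)
    inner = np.sum(A * (x - P) ** 2, axis=1)  # one term per row of A, shape (4,)
    return -np.sum(ALPHA * np.exp(-inner))

# Evaluate near the known global minimizer.
x_star = np.array([0.20169, 0.150011, 0.476874, 0.275332, 0.311652, 0.6573])
print(round(hartmann6_reference(x_star), 4))  # close to -3.3224
```

Because the optimization above minimizes hartmann6, values near -3.32 indicate the loop is approaching the global optimum.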
ax_client = AxClient()
ax_client.create_experiment(
    name="test_visualizations",
    parameters=[
        {
            "name": p_name,
            "type": "range",
            "bounds": [0.0, 1.0],
        }
        for p_name in param_names
    ],
    objectives={"hartmann6": ObjectiveProperties(minimize=True)},
    outcome_constraints=["l2norm <= 1.25"],
)
[INFO 01-05 05:18:28] ax.service.ax_client: Starting optimization with verbose logging. To disable logging, set the `verbose_logging` argument to `False`. Note that float values in the logs are rounded to 6 decimal points.
[INFO 01-05 05:18:28] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x1. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 01-05 05:18:28] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x2. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 01-05 05:18:28] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x3. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 01-05 05:18:28] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x4. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 01-05 05:18:28] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x5. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 01-05 05:18:28] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter x6. If that is not the expected value type, you can explicity specify 'value_type' ('int', 'float', 'bool' or 'str') in parameter dict.
[INFO 01-05 05:18:28] ax.service.utils.instantiation: Created search space: SearchSpace(parameters=[RangeParameter(name='x1', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x2', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x3', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x4', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x5', parameter_type=FLOAT, range=[0.0, 1.0]), RangeParameter(name='x6', parameter_type=FLOAT, range=[0.0, 1.0])], parameter_constraints=[]).
[INFO 01-05 05:18:28] ax.modelbridge.dispatch_utils: Using Bayesian optimization since there are more ordered parameters than there are categories for the unordered categorical parameters.
[INFO 01-05 05:18:28] ax.modelbridge.dispatch_utils: Calculating the number of remaining initialization trials based on num_initialization_trials=None max_initialization_trials=None num_tunable_parameters=6 num_trials=None use_batch_trials=False
[INFO 01-05 05:18:28] ax.modelbridge.dispatch_utils: calculated num_initialization_trials=12
[INFO 01-05 05:18:28] ax.modelbridge.dispatch_utils: num_completed_initialization_trials=0 num_remaining_initialization_trials=12
[INFO 01-05 05:18:28] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 12 trials, GPEI for subsequent trials]). Iterations after 12 will take longer to generate due to model-fitting.
for i in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to an external system.
    ax_client.complete_trial(
        trial_index=trial_index, raw_data=noisy_hartmann_evaluation_function(parameters)
    )
/home/runner/work/Ax/Ax/ax/core/observation.py:274: FutureWarning:
In a future version of pandas, a length 1 tuple will be returned when iterating over a groupby with a grouper equal to a list of length 1. Don't supply a list with a single grouper to avoid this warning.
[INFO 01-05 05:18:28] ax.service.ax_client: Generated new trial 0 with parameters {'x1': 0.366744, 'x2': 0.137806, 'x3': 0.907762, 'x4': 0.120644, 'x5': 0.765093, 'x6': 0.539649}.
[INFO 01-05 05:18:28] ax.service.ax_client: Completed trial 0 with data: {'hartmann6': (-0.137285, 0.1), 'l2norm': (1.346084, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 1 with parameters {'x1': 0.30605, 'x2': 0.204151, 'x3': 0.748835, 'x4': 0.911823, 'x5': 0.138952, 'x6': 0.611698}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 1 with data: {'hartmann6': (-0.222488, 0.1), 'l2norm': (1.379777, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 2 with parameters {'x1': 0.152408, 'x2': 0.438704, 'x3': 0.36768, 'x4': 0.865215, 'x5': 0.671858, 'x6': 0.307066}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 2 with data: {'hartmann6': (0.00189, 0.1), 'l2norm': (1.262404, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 3 with parameters {'x1': 0.01918, 'x2': 0.653814, 'x3': 0.343057, 'x4': 0.466311, 'x5': 0.647327, 'x6': 0.469115}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 3 with data: {'hartmann6': (-0.109086, 0.1), 'l2norm': (1.130649, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 4 with parameters {'x1': 0.809475, 'x2': 0.488234, 'x3': 0.21657, 'x4': 0.131824, 'x5': 0.428571, 'x6': 0.709214}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 4 with data: {'hartmann6': (-0.389694, 0.1), 'l2norm': (1.337593, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 5 with parameters {'x1': 0.351087, 'x2': 0.893256, 'x3': 0.782497, 'x4': 0.480145, 'x5': 0.524923, 'x6': 0.294222}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 5 with data: {'hartmann6': (-1.215238, 0.1), 'l2norm': (1.413588, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 6 with parameters {'x1': 0.482004, 'x2': 0.95188, 'x3': 0.184514, 'x4': 0.812695, 'x5': 0.273174, 'x6': 0.919145}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 6 with data: {'hartmann6': (-0.1558, 0.1), 'l2norm': (1.696221, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 7 with parameters {'x1': 0.35125, 'x2': 0.960318, 'x3': 0.83981, 'x4': 0.888374, 'x5': 0.032288, 'x6': 0.865092}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 7 with data: {'hartmann6': (0.119311, 0.1), 'l2norm': (1.865424, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 8 with parameters {'x1': 0.554687, 'x2': 0.627969, 'x3': 0.529732, 'x4': 0.349876, 'x5': 0.077459, 'x6': 0.104267}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 8 with data: {'hartmann6': (-0.578865, 0.1), 'l2norm': (1.12481, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 9 with parameters {'x1': 0.516699, 'x2': 0.938206, 'x3': 0.701714, 'x4': 0.551516, 'x5': 0.106875, 'x6': 0.581285}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 9 with data: {'hartmann6': (0.033824, 0.1), 'l2norm': (1.534193, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 10 with parameters {'x1': 0.640282, 'x2': 0.795313, 'x3': 0.948998, 'x4': 0.680851, 'x5': 0.232801, 'x6': 0.563652}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 10 with data: {'hartmann6': (-0.249035, 0.1), 'l2norm': (1.49596, 0.1)}.
[INFO 01-05 05:18:29] ax.service.ax_client: Generated new trial 11 with parameters {'x1': 0.865621, 'x2': 0.802471, 'x3': 0.174016, 'x4': 0.226507, 'x5': 0.820058, 'x6': 0.020903}.
[INFO 01-05 05:18:29] ax.service.ax_client: Completed trial 11 with data: {'hartmann6': (0.047456, 0.1), 'l2norm': (1.482524, 0.1)}.
[INFO 01-05 05:18:51] ax.service.ax_client: Generated new trial 12 with parameters {'x1': 0.285445, 'x2': 0.706535, 'x3': 0.699882, 'x4': 0.457108, 'x5': 0.509551, 'x6': 0.259973}.
[INFO 01-05 05:18:51] ax.service.ax_client: Completed trial 12 with data: {'hartmann6': (-1.172132, 0.1), 'l2norm': (1.339281, 0.1)}.
[INFO 01-05 05:19:08] ax.service.ax_client: Generated new trial 13 with parameters {'x1': 0.447629, 'x2': 0.635251, 'x3': 0.653527, 'x4': 0.373562, 'x5': 0.326768, 'x6': 0.148133}.
[INFO 01-05 05:19:08] ax.service.ax_client: Completed trial 13 with data: {'hartmann6': (-1.290785, 0.1), 'l2norm': (1.067711, 0.1)}.
[INFO 01-05 05:19:25] ax.service.ax_client: Generated new trial 14 with parameters {'x1': 0.411831, 'x2': 0.677768, 'x3': 0.726564, 'x4': 0.38483, 'x5': 0.409827, 'x6': 0.167276}.
[INFO 01-05 05:19:25] ax.service.ax_client: Completed trial 14 with data: {'hartmann6': (-1.486395, 0.1), 'l2norm': (1.197008, 0.1)}.
[INFO 01-05 05:19:55] ax.service.ax_client: Generated new trial 15 with parameters {'x1': 0.382696, 'x2': 0.774363, 'x3': 0.711538, 'x4': 0.350652, 'x5': 0.398975, 'x6': 0.116092}.
[INFO 01-05 05:19:55] ax.service.ax_client: Completed trial 15 with data: {'hartmann6': (-1.587761, 0.1), 'l2norm': (1.223754, 0.1)}.
[INFO 01-05 05:20:10] ax.service.ax_client: Generated new trial 16 with parameters {'x1': 0.417939, 'x2': 0.66576, 'x3': 0.780874, 'x4': 0.36393, 'x5': 0.395315, 'x6': 0.061474}.
[INFO 01-05 05:20:10] ax.service.ax_client: Completed trial 16 with data: {'hartmann6': (-1.276441, 0.1), 'l2norm': (1.268548, 0.1)}.
[INFO 01-05 05:20:28] ax.service.ax_client: Generated new trial 17 with parameters {'x1': 0.363619, 'x2': 0.706712, 'x3': 0.667815, 'x4': 0.269863, 'x5': 0.400997, 'x6': 0.157368}.
[INFO 01-05 05:20:28] ax.service.ax_client: Completed trial 17 with data: {'hartmann6': (-0.890432, 0.1), 'l2norm': (0.944596, 0.1)}.
[INFO 01-05 05:20:39] ax.service.ax_client: Generated new trial 18 with parameters {'x1': 0.400459, 'x2': 0.775116, 'x3': 0.694421, 'x4': 0.393481, 'x5': 0.416337, 'x6': 0.153949}.
[INFO 01-05 05:20:39] ax.service.ax_client: Completed trial 18 with data: {'hartmann6': (-1.735373, 0.1), 'l2norm': (1.323253, 0.1)}.
[INFO 01-05 05:20:56] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 0.364916, 'x2': 0.737, 'x3': 0.638414, 'x4': 0.392893, 'x5': 0.390419, 'x6': 0.11997}.
[INFO 01-05 05:20:56] ax.service.ax_client: Completed trial 19 with data: {'hartmann6': (-1.573689, 0.1), 'l2norm': (1.192496, 0.1)}.
The plot below shows the response surface for the hartmann6 metric as a function of the x1 and x2 parameters.
The other parameters are fixed at the middle of their respective ranges, which in this example is 0.5 for all of them.
# This could alternatively be done with `ax.plot.contour.plot_contour`.
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name='hartmann6'))
[INFO 01-05 05:20:56] ax.service.ax_client: Retrieving contour plot with parameter 'x1' on X-axis and 'x2' on Y-axis, for metric 'hartmann6'. Remaining parameters are affixed to the middle of their range.
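The idea of "affixing" the remaining parameters can be sketched without Ax: vary two parameters over a grid while holding the rest at their range midpoints. The quadratic `toy_objective` below is a hypothetical stand-in for the model's predictions, used only to make the slicing explicit.

```python
import numpy as np

# Toy stand-in for a 6-D objective (not the Ax model's predictions):
# a bowl centered at 0.5 in every coordinate.
def toy_objective(x):
    return float(np.sum((np.asarray(x) - 0.5) ** 2))

# Vary x1 and x2 over a grid; fix x3..x6 at the middle of their [0, 1] ranges.
grid = np.linspace(0.0, 1.0, 21)
fixed = [0.5, 0.5, 0.5, 0.5]
z = np.array([[toy_objective([x1, x2] + fixed) for x1 in grid] for x2 in grid])

print(z.shape)  # (21, 21) grid of objective values for the contour
print(z.min())  # 0.0, attained at x1 = x2 = 0.5
```

The contour plot renders exactly such a `z` grid, except the values come from the fitted surrogate model rather than a toy function.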
The plot below allows toggling between different pairs of parameters to view the contours.
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name='hartmann6'))
This plot illustrates the tradeoffs achievable between two different metrics. The plot takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics for the y-axis.
This is useful to get a sense of the Pareto frontier, i.e., the best objective value achievable for different bounds on the constraint.
render(plot_objective_vs_constraints(model, 'hartmann6', rel=False))
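To build intuition for the Pareto frontier, here is a small NumPy sketch that finds the non-dominated points among a set of (objective, constraint) observations, assuming both columns are to be minimized. The `obs` array is hypothetical example data, and note the Ax plot shows model-based tradeoffs rather than this purely observational filter.

```python
import numpy as np

def pareto_front_min(points):
    """Return indices of non-dominated rows, minimizing every column."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some other row is <= p everywhere and < p somewhere.
        dominated = np.any(
            np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (hartmann6, l2norm) observations: lower is better for both.
obs = np.array([
    [-1.5, 1.2],
    [-1.2, 0.9],
    [-0.5, 1.5],   # dominated by the first row
    [-1.6, 1.4],
])
print(pareto_front_min(obs))  # [0, 1, 3]
```

Each index kept represents a different objective/constraint tradeoff; no kept point is strictly better than another on both axes.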
CV plots are useful for checking how well the model predictions calibrate against the actual measurements. If all points are close to the dashed line, the model is a good predictor of the real data.
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))
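The mechanics behind `cross_validate` can be sketched with a simple leave-one-out loop: refit (or here, re-query) a model with one observation held out and predict it. The 1-nearest-neighbor "model" below is a hypothetical stand-in for the GP surrogate, chosen only so the example is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations: inputs in [0, 1]^2 and a smooth outcome.
X = rng.uniform(size=(30, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

# Leave-one-out CV with a 1-nearest-neighbor "model" (a stand-in for the GP).
preds = np.empty_like(y)
for i in range(len(y)):
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                 # hold out the i-th point
    preds[i] = y[np.argmin(d)]    # predict it from the rest

# A calibrated model puts (observed, predicted) pairs near the line preds == y;
# the CV plot visualizes exactly these pairs, with error bars from the model.
rmse = np.sqrt(np.mean((preds - y) ** 2))
print(round(rmse, 3))
```

Ax's interactive CV plot does the analogous thing with the fitted surrogate, plotting held-out predictions (with uncertainty) against the observed values.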
Slice plots show the metric outcome as a function of one parameter while fixing the others. They serve a purpose similar to contour plots.
render(plot_slice(model, "x2", "hartmann6"))
Tile plots are useful for viewing the effect of each arm.
render(interact_fitted(model, rel=False))
Total runtime of script: 3 minutes, 0.89 seconds.